Systems and Methods for Image Classification by Correlating Contextual Indications with Images
Patent abstract:
Systems and methods for classifying images by correlating contextual indications with images. The present invention relates to receiving a set of sample images. Each image in the sample set can be associated with one or more social cues. The correlation of each image in the sample set with an image class is scored based on the one or more social cues associated with the image. Based on the score, a set of training images for training a classifier is determined from the sample set. In one embodiment, an extent to which a set of evaluation images correlates with the image class is determined. The determination may comprise ranking a higher-scored subset of the evaluation image set.
Publication number: BR112016003926A2
Application number: R112016003926-2
Filing date: 2014-02-11
Publication date: 2019-11-12
Inventors: Paluri Balamanohar; Bourdev Lubomir
Applicant: Facebook Inc.
IPC main class:
Patent description:
"SYSTEMS AND METHODS FOR CLASSIFYING IMAGES THROUGH CORRELATION OF CONTEXTUAL INDICATIONS WITH IMAGES"

TECHNICAL FIELD

[001] The technical field refers to the field of social networks. More particularly, the technical field refers to image classification techniques on social networks.

BACKGROUND

[002] A social network can provide an interactive, content-rich online community that connects its members to each other. Members of a social network can indicate how they relate to each other. For example, members of a social network can indicate that they are friends, family members, business partners, or followers of each other, or members can assign some other relationship to each other. A social network can allow members to exchange messages with each other or post messages to the online community.

[003] A social network can also allow members to share content with each other. For example, members can create or use one or more pages containing an interactive feed that can be viewed across multiple platforms. Pages can contain images, video, and other content that a member wants to share with certain members of the social network or publish to the social network in general. Members can also share content with the social network in other ways. In the case of images, members can, for example, post the images on an image board or make the images available for search by the online community.

SUMMARY

[004] A system can comprise at least one processor and a memory that stores instructions configured to instruct the processor to receive a set of sample images, in which each image in the sample set is associated with one or more social cues. The correlation of each image in the sample set with an image class is scored based on the one or more social cues associated with the image. Based on the scores, a set of training images for training a classifier can be determined from the sample set.
Petition 870170009199, of 02/10/2017, p. 6/69
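The scoring-and-selection flow summarized above can be sketched as follows. This is a minimal illustration only, not the patented implementation: the tag-weighting scheme, function names, and example data are all hypothetical assumptions.

```python
def score_image(tags, class_tag, related_tags):
    """Hypothetical social-cue scoring: the class tag itself contributes
    1.0, and each co-occurring related tag contributes 0.5. No pixel
    data is examined at this stage."""
    score = 0.0
    for tag in tags:
        if tag == class_tag:
            score += 1.0
        elif tag in related_tags:
            score += 0.5
    return score

def select_training_set(sample_images, class_tag, related_tags, top_k):
    """Score every sample image by its social cues, rank the sample set
    by descending score, and keep the top-scored subset as the
    training set."""
    scored = sorted(
        ((score_image(tags, class_tag, related_tags), image_id)
         for image_id, tags in sample_images),
        reverse=True,
    )
    return [image_id for score, image_id in scored[:top_k] if score > 0]

samples = [
    ("img1", ["#cat", "#animal", "#pet"]),
    ("img2", ["#cat", "#halloween"]),
    ("img3", ["#dog", "#park"]),
]
training = select_training_set(samples, "#cat", {"#animal", "#pet"}, top_k=2)
```

Note that the sketch ranks images purely on non-visual cues, mirroring the claim that the training set is determined before any visual analysis occurs.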
[005] In some embodiments, the image class can be specified. In some embodiments, the determination may comprise ranking each image from the sample set based on its score. The determination may comprise selecting a top-scored subset from the set of sample images. The top-scored subset can be the set of training images.

[006] In various embodiments, a classifier can be trained based on the set of training images. A visual pattern model associated with the image class can be generated. In some embodiments, the classifier can be configured to use a histogram-of-visual-words image classification technique or a neural network image classification technique.

[007] In some embodiments, an extent to which a set of evaluation images correlates with the image class can be determined. The set of evaluation images may be different from the set of sample images. The evaluation image set may comprise a larger image set than the sample image set.

[008] In various embodiments, the correlation of each image in the set of evaluation images with a visual pattern model associated with the image class can be scored. Each image in the evaluation set can be ranked based on the correlation score for each image in the set of evaluation images. A top-scored subset of the set of evaluation images can be associated with the image class.

[009] In some embodiments, the one or more social cues may comprise one or more image tags. The number of instances of particular image tags among a total number of the one or more image tags associated with an image can be determined.

[010] In some embodiments, the one or more social cues may comprise one or more of: location data associated with an image from the set of sample images; or an identity of an image uploader, a tagger, or an image owner of an image from the sample image set. In various embodiments, the one or more social cues can be received by a social network system.
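Paragraph [006] mentions a histogram-of-visual-words technique. As a hedged illustration of that general idea only (the toy vocabulary, descriptors, and squared-distance assignment below are assumptions, not the classifier disclosed here):

```python
def visual_word_histogram(descriptors, vocabulary):
    """Assign each local feature descriptor to its nearest 'visual word'
    (a cluster centre in descriptor space) and count the assignments.
    The resulting histogram is the fixed-length image representation a
    classifier would consume."""
    histogram = [0] * len(vocabulary)
    for descriptor in descriptors:
        # Nearest visual word by squared Euclidean distance.
        nearest = min(
            range(len(vocabulary)),
            key=lambda i: sum((a - b) ** 2
                              for a, b in zip(descriptor, vocabulary[i])),
        )
        histogram[nearest] += 1
    return histogram
```

In practice the vocabulary would be learned (for example, by clustering descriptors from the training images), and the histograms would feed a trained classifier rather than be compared directly.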
[011] A computer-implemented method may comprise receiving, through a computer system, a set of sample images, in which each image in the sample set is associated with one or more social cues. The method may include scoring, through the computer system, the correlation of each image in the sample set with an image class based on the one or more social cues associated with the image. The method may also include determining, through the computer system, based on the scores, a set of training images for training a classifier from the sample set.

[012] A computer storage medium may store computer-executable instructions that, when executed, cause a computer system to perform a computer-implemented method that comprises receiving a set of sample images, in which each image in the sample set is associated with one or more social cues. The method may include scoring the correlation of each image in the sample set with an image class based on the one or more social cues associated with the image. The method may also include determining, based on the scores, a set of training images for training a classifier from the sample set.

[013] Other features and embodiments are apparent from the attached drawings and from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[014] Figure 1 shows an example of a contextual image classification system, according to some embodiments.

[015] Figure 2A shows an example of an image classification module, according to some embodiments.

[016] Figure 2B shows an example of an image classification module, according to some embodiments.

[017] Figure 3 shows an example of an image classification training module, according to some embodiments.

[018] Figure 4 shows an example of an image classification evaluation module, according to some embodiments.

[019] Figure 5 shows an example of a classifier, according to some embodiments.
[020] Figure 6 shows an example of a process for classifying images, according to some embodiments.

[021] Figure 7 shows an example of a process for training a classifier, according to some embodiments.

[022] Figure 8 shows an example of a process for classifying images, according to some embodiments.

[023] Figure 9 shows an example of a visualization of contextually generated image filters applied to a group of images, according to some embodiments.

[024] Figure 10 shows an example of a visualization of a contextually generated image filter applied to a group of images, according to some embodiments.

[025] Figure 11 shows an example of a network diagram of a contextual image classification system within a social network system, according to some embodiments.

[026] Figure 12 shows an example of a computer system that can be used to implement one or more of the embodiments described in this document, according to some embodiments.

[027] The Figures depict various embodiments of the present invention for illustration purposes only, in which the Figures use similar reference numbers to identify similar elements. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated in the Figures can be employed without departing from the principles described in this document.

DETAILED DESCRIPTION

CLASSIFICATION OF IMAGES THROUGH THE CORRELATION OF CONTEXTUAL INDICATIONS

[028] A social network system can provide users with the ability to generate content and share it with friends. Users of a photo sharing service of the social networking system can enjoy capturing images (for example, still images, memes), video, or interactive content on their cell phones and sharing the content with their friends online. Similarly, users can enjoy sharing content with their friends, for example, by updating interactive feeds on their home pages.
[029] A social networking system may also provide or support the ability to indicate, identify, categorize, label, describe, or otherwise provide information about a content item or attributes of the content. One way to indicate such information is through a tag that can identify or otherwise refer to the subject of the content or its attributes. Another way to indicate such information is through the global positioning system (GPS) coordinates of a user who uploads content, which identify the location of the upload or where the content was captured. As described in more detail in this document, there are many other ways to indicate information about content on social networking systems. Many of these indicators, which include tags (for example, hashtags or other metadata markup) and GPS coordinates, are non-visual and are not based on automated analysis of visual data in the content.

[030] In certain circumstances, non-visual indicators can be subjective or potentially misleading. For example, while the tags that a content generator chooses to apply to its own content can describe the subject of the content from the content generator's perspective, the tags can be considered erroneous or even irrelevant descriptions from the perspective of others. A user who posts a photo of himself dressed as Catwoman on Halloween, for example, can tag the photo as #cat, even if the photo does not contain a domestic cat. A user who posts an image of a dog called "Paris" can tag the photo with the tag #paris, even if the image does not show Paris, France. A user who posts images that are captured by his family at the Super Bowl in New Orleans on Super Bowl Sunday may have GPS coordinates and/or timestamps that indicate that the images were captured at the Super Bowl; however, the content of the images themselves may not refer to a football game.
[031] Although the subjectivity of non-visual indicators helps users of social network systems to express themselves and creatively share a rich variety of content, the subjectivity of non-visual indicators often makes it difficult to search user-uploaded images, such as photographs. For example, an attempt to search images posted on a social networking system for cats may return an image of a user in a Catwoman costume on Halloween. An attempt to carry out a graphic search for images of the Eiffel Tower in Paris, France, may return images of a dog called "Paris". An attempt to search for Super Bowl photographs may return personal photographs of a fan's family that may not be highly relevant to someone looking for accounts of a football game. In a sense, the non-visual indicators associated with the images in these examples are noisy because they may not precisely reflect the contents of the images with which they are associated. It may be desirable to accurately search user-uploaded content on social networking systems.

[032] Figure 1 shows an example of a contextual image classification system 102, according to some embodiments. The contextual image classification system 102 can be incorporated into a social network system, an example of which is provided in Figure 11. In the example in Figure 1, the contextual image classification system 102 can include an image classification module 104 and an image application module 106.

[033] The image classification module 104 can recognize the subject in content based on the contextual indications associated with the content and visual attributes of the content. Content can include, for example, images, memes, video, interactive audiovisual material, etc. A visual attribute can include a visual pattern in an image or an image segment that reflects a characteristic property of the subject shown in the content.
Visual attributes can be based on one or a combination of, for example, appearance, color, shape, layout, etc.

[034] A contextual indication may include a non-visual indicator of the subject shown in the content. A contextual indication can reflect or suggest the subject of at least a portion of the content. In some embodiments, a contextual indication may comprise a content tag. Contextual indications can also include other types of non-visual indicators of the subject in the content, such as social cues. For example, without limitation, contextual indications may include: global positioning system (GPS) coordinates of the user or digital device, the number of tags in addition to a specified tag, the extent to which a specified tag has occurred in a series of tags, the order of a specified tag in a series of tags, the identity of a content tagger (for example, an entity that associates character strings with content), the identity of a content uploader (for example, an entity that provides content for storage in a social network system data store), the identity of the content owner, the time of the content upload, connections and connection types (for example, friends) of the tagger (or of the image uploader or the owner), the status or profile of the tagger (or of the image uploader or the owner), metadata associated with the content, identities of people who view or like a certain type of content, Exchangeable Image File (EXIF) information, etc.

[035] In some embodiments, the image classification module 104 can train a classifier to recognize visual attributes of an image class based on the contextual indications collected from a set of sample images. A sample image set can include a group of images from which a training set is selected to train the classifier. The sample image set can include a number of images large enough to ensure an accurate result by the classifier.
The classifier can assign to each item of content a score that corresponds to the extent to which the content matches a particular image class. In some embodiments, the classifier may incorporate a hierarchical classifier, a linear classifier, or another classifier. An example of a classifier is provided in Figure 5. In some embodiments, the classifier can be initially trained based on a subset of selected images maintained by the social network system. The classifier can be retrained under various circumstances. For example, the classifier can be retrained periodically at a selected frequency, or non-periodically as images become available to the classifier. As another example, the classifier can be retrained upon the occurrence of certain events, such as events (for example, the Super Bowl) that are likely to cause a large number of images to be uploaded to the social network system. As yet another example, the classifier can be retrained when the social network system receives a threshold number of new images. Retraining in these and other circumstances can refine the classifier's ability to recognize visual attributes of image classes.

[036] An image class can include, for example, objects (for example, a cat, car, person, bag, etc.), brands or objects associated with brands (for example, Coca-Cola®, Ferrari®), professional sports teams (for example, Golden State Warriors®), locations (for example, Mount Everest), activities (for example, swimming), phrases or concepts (for example, a red dress, happiness), and any other thing, action, or notion that can be associated with content. Although many examples provided in this document may refer to a single image class, it is noted that the image class can refer to a plurality of image classes or to one or more image classes that comprise a combination of objects, brands, professional sports teams, locations, etc.
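The retraining triggers described above (periodic retraining, event-driven retraining, and a threshold number of newly uploaded images) can be sketched as a simple policy object. The class, its thresholds, and its event handling are illustrative assumptions, not part of the disclosure.

```python
class RetrainPolicy:
    """Hypothetical policy deciding when to retrain the classifier:
    on a named event, once a threshold number of new uploads has
    accumulated, or after a fixed period has elapsed."""

    def __init__(self, image_threshold, period_seconds):
        self.image_threshold = image_threshold
        self.period_seconds = period_seconds
        self.new_images = 0
        self.last_trained = 0.0

    def record_upload(self, count=1):
        # Called as new images arrive at the social network system.
        self.new_images += count

    def should_retrain(self, now, event=None):
        if event is not None:  # e.g. an event like the Super Bowl
            return True
        if self.new_images >= self.image_threshold:
            return True
        return now - self.last_trained >= self.period_seconds

    def mark_trained(self, now):
        self.new_images = 0
        self.last_trained = now
```

The same structure could drive the evaluation-phase triggers discussed later, since the disclosure lists analogous conditions for re-running evaluation.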
[037] In some embodiments, the image classification module 104 may use a trained classifier to compare visual attributes of a set of evaluation images with visual attributes of the image class and determine whether the visual attributes in the set of evaluation images sufficiently correlate with the visual attributes of the image class. A set of evaluation images can include a group of images selected for classification by a classifier. In various embodiments, the set of evaluation images can include all or a portion of the images in a data store, or all or a portion of the images in a social network system. In one embodiment, the classifier can be trained by any suitable technique, such as machine learning.

[038] In various embodiments, the image classification module 104 can provide classified content to the image application module 106. Classified content may include content that has been ranked and/or scored by a classifier. In contrast, raw content or unclassified content may include content that has not been ranked and/or scored by a classifier or otherwise associated with one or more image classes. Classified content may have a score indicating the extent to which the classified content corresponds to the image class. Higher-scored items of classified content have higher degrees of correlation with the visual attributes of the image class. As a result, in various embodiments, the image classification module 104 can enable efficient search of classified content based on the scores.

[039] The use of the classifier to analyze a set of evaluation images for classification can occur at various times. For example, the classifier can analyze a set of evaluation images at a selected frequency, or non-periodically as the images become available to the classifier.
The classifier can also analyze a set of evaluation images upon the occurrence of certain events, such as events that are likely to cause a large number of images to be uploaded to the social network system. The classifier can analyze a set of evaluation images when the social network system receives a threshold number of new images. As yet another example, the classifier can analyze a set of evaluation images for classification before advanced image searches are carried out.

[040] The image application module 106 can adapt classified content for use in a social network system. In some embodiments, the image application module 106 can interface with search application programming interfaces (APIs) to make each item of classified content searchable according to its image class. For example, the image application module 106 can interface with a search module that searches classified images that users have uploaded to a social network system. As another example, the image application module 106 can interface with a search module that searches feeds of a social network system for classified images or memes that users have posted to their feeds. The image application module 106 can also provide classified content in response to search queries. In some embodiments, the image application module 106 can extract topics associated with the classified images provided by the image classification module 104 using subject dictionaries, category trees, and topic tagging techniques.

[041] Figure 2A shows an example of an image classification module 104, according to some embodiments. The image classification module 104 may include an unclassified image data store 202, an image classification training module 204, and a classifier 208. In addition to the components shown in Figure 2A, the image classification module 104 can also include the components represented in Figure 2B.
Note that similar elements in Figure 2A and Figure 2B may have similar reference numbers.

[042] The unclassified image data store 202 can be coupled to the image classification training module 204 and the image classification evaluation module 206. The unclassified image data store 202 can contain unclassified images. Unclassified images may have contextual indications associated with them. Data stores can use any data organization, including tables, comma-separated value (CSV) files, traditional databases (for example, SQL), or other known or convenient organizational formats. In some embodiments, the unclassified image data store 202 may also store a set of contextual indications associated with the images, such as tags or other indications. In some embodiments, the unclassified image data store 202 may represent a portion or all of the unclassified images in a social network system.

[043] The image classification training module 204 can be coupled to the unclassified image data store 202 and to the classifier 208. In some embodiments, the image classification training module 204 can implement a training phase. The training phase can include a phase of the image classification module 104 in which the image classification training module 204 trains the classifier 208 to recognize visual attributes of images selected from a set of sample images. During the training phase, the image classification training module 204 can obtain a set of sample images from the unclassified image data store 202. The image classification training module 204 can also collect a set of contextual indications associated with each unclassified image obtained. Based on the set of contextual indications, a set of training images selected from the set of sample images can be used to train the classifier 208 to recognize visual patterns.
Unclassified images and/or contextual indications can be obtained by querying the unclassified image data store 202 with the relevant information to determine the set of training images from the set of sample images.

[044] During the training phase, the image classification training module 204 can be configured to specify one or more image classes with which to train the classifier 208. To specify the image class, the image classification training module 204 can receive automated input that defines the image class. Specifying the image class can also involve manual input from a person, such as an administrator in charge of classifying images.

[045] During the training phase, the image classification training module 204 can be configured to identify and select contextual indications that correspond to the image class. In various embodiments, the image classification training module 204 can evaluate the attributes of a particular image class and can determine whether certain contextual indications are likely to be associated with that image class. For example, the image classification training module 204 can determine that one type of tag is likely to accompany a photo of a domestic cat, while another type of tag is likely to accompany a photo of a user in a Catwoman costume on Halloween. In such a case, the image classification training module 204 can select the type of tags that are likely to accompany a photo of a domestic cat as corresponding to a cat image class. As discussed in more detail in this document, the consideration of whether contextual indications apply to a particular image class can be based on many considerations, such as the tags themselves (for example, the #cat tag, the #Halloween tag, etc.)
, the order of the tags, whether particular tags are accompanied by other particular tags (for example, whether the #cat tag is accompanied by the #animal tag or whether the #cat tag is accompanied by the #Halloween tag), etc. The image classification training module 204 can also be configured to rank and/or score the extent to which the contextual indications associated with a particular image correspond to a particular image class.

[046] The contextual indications are analyzed to identify a set of training images from the set of sample images. The set of training images represents the images that are most closely correlated with an image class. During the training phase, the image classification training module 204 can provide the set of training images to the classifier 208 to identify visual attributes associated with the image class. In some embodiments, the image classification training module 204 may instruct the classifier 208 to create a model of a visual pattern that corresponds to a particular image class. In some embodiments, the image classification training module 204 can, if desired, store classified images and/or related visual pattern models in one location, such as the classified image data store 210. The image classification training module 204 can additionally use manual annotators to help select the set of training images. The image classification training module 204 is further discussed in the context of Figure 3 and Figure 7.

[047] The classifier 208 can be coupled to the image classification training module 204 and to a classified image data store (for example, the classified image data store 210 shown in Figure 2B). The classifier can receive images from the image classification training module 204. In the training phase, the classifier 208 can evaluate a set of training images for the presence of particular visual patterns.
The classifier 208 can associate particular visual patterns with image classes, can create visual pattern models, and can have visual pattern models stored. In the training phase, the classifier 208 can return the images used for training to the image classification training module 204. In some embodiments, the classifier 208 may include a return connection (for example, through a feedback loop) to the image classification training module 204. As a result, the classifier 208 can aid the accuracy of the image classification training module 204. Such a feedback connection can help improve future classification and training. The classifier 208 is further discussed in the context of Figure 5.

[048] Figure 2B shows an example of an image classification module 104, according to some embodiments. The image classification module 104 may include the unclassified image data store 212, an image classification evaluation module 206, the classifier 208, and a classified image data store 210. The unclassified image data store 212 can be coupled to the image classification evaluation module 206. The unclassified image data store 212 can store unclassified images. The unclassified image data store 212 may, but need not, be the same as the unclassified image data store 202 shown in Figure 2A.

[049] The image classification evaluation module 206 can be coupled to the unclassified image data store 212 and to the classifier 208. In some embodiments, the image classification evaluation module 206 may implement an evaluation phase. An evaluation phase can include a phase of the image classification module 104 in which the image classification evaluation module 206 uses the classifier 208 to recognize visual patterns in a set of evaluation images. In one embodiment, the set of evaluation images can be selected from the unclassified image data store 212.
[050] During the evaluation phase, the image classification evaluation module 206 can provide the set of evaluation images from the unclassified image data store 212 to the classifier 208. In various embodiments, the image classification training module 204 may have, in the training phase, trained the classifier 208 to recognize visual attributes of a set of training images associated with an image class. The evaluation image set may comprise a different image set than the sample image set and the training image set. In some embodiments, the image classification evaluation module 206 can also provide the classifier 208 with an image class that the classifier 208 must compare with the set of evaluation images. In various embodiments, the image classification evaluation module 206 can instruct the classifier 208 to rank and/or score the set of evaluation images based on their correlation with the image class. The image classification evaluation module 206 can also store the ranked and scored images (that is, images that have been scored based on correlation with the image class) in the classified image data store 210. The image classification evaluation module 206 is further discussed in the context of Figures 4 and 8.

[051] The classifier 208 can be coupled to the image classification training module 204 and the image classification evaluation module 206. The classifier 208 can receive images from the image classification evaluation module 206. In the evaluation phase, the classifier 208 can perform visual pattern recognition on a set of evaluation images to score the correlation between each image and visual pattern models associated with an image class of interest. In the evaluation phase, the classifier 208 can return the images used in the evaluation to the image classification evaluation module 206. The classifier 208 is further discussed in the context of Figure 5.
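The evaluation phase just described, scoring each evaluation image against a visual pattern model, ranking the set, and keeping a top-scored subset, might look like the following sketch, where `model_score` is a hypothetical stand-in for the trained classifier and the feature dictionaries are toy data:

```python
def rank_evaluation_images(evaluation_set, model_score):
    """Score each (image_id, features) pair against a visual pattern
    model for one image class and rank by descending correlation."""
    return sorted(
        ((model_score(features), image_id)
         for image_id, features in evaluation_set),
        reverse=True,
    )

def top_scored_subset(ranked, threshold):
    """Associate only sufficiently correlated images with the class."""
    return [image_id for score, image_id in ranked if score >= threshold]
```

A real deployment would persist the `(score, image_id)` pairs in the classified image data store so that search APIs can retrieve the highest-scored matches first.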
[052] The classified image data store 210 can be coupled to the classifier 208 and the image application module 106. The classified image data store 210 can store information that includes classified images, image classes, visual pattern models, and other information. In some embodiments, the classified image data store 210 can be indexed to facilitate efficient searches of classified images by APIs that seek access to the classified images. For example, the classified image data store 210 can be configured to be compatible with a search module, coupled to the image application module 106, that seeks to access the classified images.

[053] Figure 3 shows an example of an image classification training module 204, according to some embodiments. The image classification training module 204 may include a training image selection module 301, a training image data store 309, and a classifier training module 310.

[054] The training image selection module 301 can be coupled to the unclassified image data store 202 and the training image data store 309. The training image selection module 301 can identify a set of training images from the set of sample images. The training image selection module 301 can also store the training image set in the training image data store 309. The training image selection module 301 can include a training image collection module 302, a contextual indication extraction module 304, an image class specification module 306, and an image class correlation module 308.

[055] The training image collection module 302 can be coupled to the other modules of the training image selection module 301. In some embodiments, the training image collection module 302 can collect a set of sample images together with contextual indications associated with the set of sample images. The sample image set and associated contextual indications can be retrieved from the unclassified image data store 202.
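Paragraph [052] notes that the classified image data store can be indexed so search APIs can access classified images efficiently. A minimal sketch of such an index, assuming hypothetical `(image_id, image_class, score)` records rather than any format disclosed here:

```python
from collections import defaultdict

def build_class_index(classified_images):
    """Map each image class to its image ids, sorted by descending
    correlation score, so that searches return best matches first."""
    index = defaultdict(list)
    for image_id, image_class, score in classified_images:
        index[image_class].append((score, image_id))
    for image_class in index:
        index[image_class].sort(reverse=True)
    return index

def search(index, image_class, limit=10):
    """Return up to `limit` image ids for a class, best-scored first."""
    return [image_id for _, image_id in index.get(image_class, [])[:limit]]
```

A production store would use a database index rather than an in-memory dictionary, but the access pattern (class key, score-ordered results) is the same.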
[056] The contextual cue extraction module 304 can be coupled to the other modules of the training image selection module 301. The contextual cue extraction module 304 can be configured to extract contextual cues associated with the set of sample images. As discussed in this document, contextual cues may include non-visual indicators of the contents of an image. Examples of contextual cues for an image may include image tags for the image, GPS coordinates of a device that captured the image, the identities of the tagger, uploader, and owner of the image, other information directly or indirectly related to the image, etc. In some embodiments, the contextual cue extraction module 304 can provide the image class correlation module 308 with the set of contextual cues, so that the image class correlation module 308 can correlate the contextual cues with an image class. [057] The image class specification module 306 can be coupled to the other modules of the training image selection module 301. The image class specification module 306 can be configured to specify an image class that the classifier 208 is to be trained to recognize. In some embodiments, the image class specification module 306 may receive an instruction to specify the image class from an administrator, which can be human or automated. In various embodiments, specifying an image class may involve creating an image class, if one does not exist, or designating an existing image class. [058] The image class correlation module 308 can be coupled to the other modules of the training image selection module 301. The image class correlation module 308 can receive one or more of the set of sample images, along with associated contextual cues, from the contextual cue extraction module 304, and can receive a specified image class from the image class specification module 306. The image class correlation module 308 can determine the extent to which the contextual cues of a particular image correlate with a specific image class. More specifically, the image class correlation module 308 can assign to each image a score or value that indicates the likelihood that the image correlates with the image class. In some embodiments, the image class correlation module 308 can also rank the set of sample images based on the score of each image. In some embodiments, the image class correlation module 308 may select a training set from the sample image set, such as the highest-scoring images in the sample image set, to provide to the classifier training module 310. Advantageously, the image class correlation module 308 need not perform any visual recognition of the content in the sample image set. [059] The following discussion provides examples of how the image class correlation module 308 can determine likely image content based on the contextual cues associated with the images. As provided in the following discussion, the image class correlation module 308 can analyze the tags themselves, the syntax of the tags, or can perform other types of analysis on the contextual cues extracted by the contextual cue extraction module 304. The image class correlation module 308 can also apply any combination of the following examples to correlate, based on contextual cues, the sample image set with a specified image class to determine a set of training images. [060] In some embodiments, the image class correlation module 308 can analyze the image tag syntax of the sample image set.
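Before turning to the individual syntax analyses, the scoring and top-scoring-subset selection described in [058] might be sketched, for illustration, as follows. The one-point-per-matching-tag scoring rule is a deliberately simple stand-in assumption; the embodiments describe much richer analyses.

```python
# Sketch of the selection step in [058]: score each sample image from
# its contextual cues, rank the samples, and keep the top-K as the
# training set. The scoring rule here is a toy assumption.
from typing import Dict, List

def score_by_tags(tags: List[str], class_keywords: List[str]) -> float:
    """Toy contextual score: count tags containing any class keyword."""
    return float(sum(any(kw in tag for kw in class_keywords) for tag in tags))

def select_training_set(samples: Dict[str, List[str]],
                        class_keywords: List[str], k: int) -> List[str]:
    """Rank sample image ids by contextual score and return the top-K."""
    ranked = sorted(samples,
                    key=lambda img: score_by_tags(samples[img], class_keywords),
                    reverse=True)
    return ranked[:k]

samples = {
    "img1": ["#cat", "#housecat", "#feline"],
    "img2": ["#car", "#sunday"],
    "img3": ["#cat", "#costume"],
}
training = select_training_set(samples, ["cat", "feline"], k=2)
# training contains the two images whose tags best match the class
```

Note that, as in [058], no visual recognition is performed here; only the contextual cues are consulted.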
The image class correlation module 308 can determine how likely a specific syntax is to correlate with a given image class. In some embodiments, parsing of image tags may involve assigning weights to the exact language of the image tags. That is, the image class correlation module 308 can determine that the exact tag words associated with an image indicate that the tags should be correlated with an image class. For example, an image can be tagged with the image tag "#domestic housecat". The image class correlation module 308 can determine that a "#domestic housecat" tag correlates to a high degree with an image class for domestic cat images. As another example, the image class correlation module 308 can determine that a "#domestic house market" tag correlates to a low degree with the image class for domestic cats. [061] The image class correlation module 308 can also analyze the order of a particular image tag in a series of image tags associated with an image. For example, a person looking to tag a picture of a domestic cat may include the following series of tags: "#cat, #athome, #Sunday, #animal". The image class correlation module 308 can identify that the #cat tag is the first tag in the series and, therefore, that the image is probably an image of a domestic cat. The image class correlation module 308 can weight the series of tags accordingly when correlating the image with the image class. Note that, in correlating an image with an image class, the image class correlation module 308 can also account for tags in positions other than the first position in a series of tags. [062] The image class correlation module 308 can also analyze whether multiple image tags are synonymous with each other. For example, suppose that a first image was tagged as follows: "#cat, #athome, #Halloween, #Catwoman, #costume, #DC Comics®".
Suppose further that a second image was tagged as follows: "#cat, #mammal, #animal, #housecat, #tomcat, #feline, #cute". The image class correlation module 308 can determine, based on some of the tags of the first image, that the series of tags are not synonymous with each other, and that the first image is not likely to contain an image of a domestic cat. The image class correlation module 308 can further determine, based on some of the tags of the second image, that the series of tags are synonymous (for example, "cat", "tomcat", and "feline"), and that the second image is more likely than the first image to contain an image of a domestic cat. Consequently, the image class correlation module 308 can assign the second image a higher score than the first image in relation to the image class of domestic cats. [063] In some embodiments, the image class correlation module 308 can evaluate a plurality of image tags for the absence of an antonym or divergent meanings. For example, suppose that a first image was tagged as follows: "#blackcar", "#whitecar", "#luxurycar", "#My Mercedes". Suppose that a second image was tagged as follows: "#blackcar", "#darkcar", "#luxurycar", "#My Mercedes". The image class correlation module 308 can determine, based on the fact that the second series of tags lacks an antonym for "blackcar", that the second image correlates to a high degree with an image class that corresponds to a black car. The image class correlation module 308 can assign the second image a higher score than the first image in relation to the black car image class. [064] The image class correlation module 308 can, in some embodiments, evaluate the relationship of an image tag based on an ontology or a language hierarchy.
For example, the image class correlation module 308 can develop an ontology of one or more words from an online source (for example, WordNet), and can identify semantic relationships among words (for example, "happy", "sad", "red dress", "black car"). As another example, suppose that an image was tagged as follows: "#cat, #mammal, #animal, #housecat, #tomcat, #feline, #cute". The image class correlation module 308 can determine that "cat" is part of the family of items identified by "mammal", which, in turn, is part of the family of items identified by "animal". As a result, the image class correlation module 308 can, in such an example, determine that the tags provide a reliable indicator of a domestic cat shown in the image. The image class correlation module 308 can then score the image accordingly in relation to the image class of domestic cats. [065] In various embodiments, the image class correlation module 308 can perform other natural language analysis of tag words and phrases. In some embodiments, the image class correlation module 308 may account for misspellings of tag words. The image class correlation module 308 may also account for languages other than English, which includes looking for the presence of non-English words in conjunction with their English counterparts (for example, #cat and #gato in the same series of tags). In each of these examples, the image class correlation module 308 can score a set of images appropriately in relation to a particular image class. [066] In addition to analyzing the syntax of image tags, the image class correlation module 308 can analyze social cues related to the generation and/or tagging of images. For example, the image class correlation module 308 can analyze location data associated with the sample image set. More specifically, the image class correlation module 308 can evaluate the GPS coordinates of an image.
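For illustration, one way such a location-based score might be computed is to decay the score with the image's distance from a landmark associated with the image class. The haversine formula, the landmark coordinates, and the 1 km decay radius below are assumptions for this sketch, not part of the disclosed embodiments.

```python
# Illustrative location-based scoring: an image whose GPS coordinates
# fall near a known landmark receives a higher score for the
# corresponding image class. Landmark and radius are assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GOLDEN_GATE = (37.8199, -122.4783)  # approximate landmark coordinates

def location_score(image_gps, landmark=GOLDEN_GATE, radius_km=1.0):
    """Score in [0, 1] that decays linearly with distance from the landmark."""
    d = haversine_km(image_gps[0], image_gps[1], landmark[0], landmark[1])
    return max(0.0, 1.0 - d / radius_km)

near = location_score((37.8200, -122.4784))  # taken at the bridge
far = location_score((40.7128, -74.0060))    # taken in New York
```

An image scored by `near` would correlate strongly with a bridge image class, while the image scored by `far` would not.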
For example, the GPS coordinates can be taken from a GPS transceiver on a user's mobile device, or they can be taken from the image's geotag when or after the image is uploaded. Using the location data of a particular image, the image class correlation module 308 can assign a score to the particular image in relation to a specific image class. For example, the image class correlation module 308 can determine from location data that an image was taken near the Golden Gate Bridge in San Francisco. The image class correlation module 308 can then assign the image a score in relation to a bridge image class. [067] In some embodiments, the image class correlation module 308 can analyze social cues that include the identity of the generator of a particular image in the sample image set. The image class correlation module 308 can assign scores depending on whether particular entities are likely to generate images in a particular image class. For example, if an entity has historically generated many images in a particular image class, the image class correlation module 308 can score a particular image of the entity to reflect a high degree of correlation with the image class. [068] In some embodiments, the image class correlation module 308 can analyze social cues that indicate whether the owner of an image among the set of sample images was the image's generator or tagger. In some embodiments, the image class correlation module 308 can determine whether an entity owns an image, or intellectual property rights to the generated image, or has tagged the image. For example, in relation to a trademark image class (for example, a Coca-Cola® logo), the image class correlation module 308 can give an image a score that reflects a high degree of correlation if the image was generated or tagged by the entity that owns the trademark (for example, Coca-Cola®). Such a weighting scheme can prove particularly advantageous for recognizing trademark images or logos. [069] In some embodiments, the image class correlation module 308 can analyze social cues that indicate the status or profile of a person in a social networking system. The person can be a generator and/or tagger of a particular image in the sample image set. For example, the image class correlation module 308 can determine, based on the person's activities (e.g., past posts, status updates, friendships, message history, past tagging history, past image generation history, browsing history, online profile, etc.) or relationships (for example, friends), whether the person is likely to be a reliable image generator and/or tagger. The image class correlation module 308 can assign scores to one or more images based on whether the person's status or profile indicates that the person is a reliable image generator and/or tagger, both in general and in relation to a particular image class. [070] In various embodiments, the image class correlation module 308 can analyze social cues that indicate the presence or absence of other image classes. For example, the image class correlation module 308 can determine that an image includes a boat and, as a result, that the image is unlikely to be indoors. Likewise, the presence of contextual cues (for example, a hashtag #indoors) may indicate that the image is unlikely to contain a boat. As another example, the image class correlation module 308 may determine that the presence of one object in an image may make the presence of another object in the image more or less likely. For example, the presence of contextual cues that indicate cows in the image may allow the image class correlation module 308 to provide a negative correlation for airplanes, given that an image is unlikely to contain both cows and airplanes. As another example, the presence of a contextual cue that indicates a chicken in an image may allow the image class correlation module 308 to provide a positive correlation for other chickens in the image, given that an image containing one chicken often contains other chickens. Likewise, the presence of a sheep can indicate a positive correlation for a sheepdog. [071] The image class correlation module 308 can analyze a variety of social cues, which include, but are not limited to, those discussed above and one or more of: whether the image uploader is the owner of the image, the image file type, metadata in or associated with the image, the identities of users who liked an image, the sophistication or experience of users who viewed a tag or image, whether the image was previously classified into an image class (and, if so, the image's score for that image class), etc. Other social cues, in addition to those expressly discussed in the present document, can be analyzed by the image class correlation module 308 to correlate and score images with respect to an image class. The image class correlation module 308 can assign scores to multiple images based on the extent to which the contextual cues of those images correlate to a high degree with one or more image classes. [072] In addition to assigning scores to the images, the image class correlation module 308 can also rank the sample image set in relation to one or more image classes. For example, the image class correlation module 308 can rank each image in the sample image set based on the score of each image in relation to an image class. The ranking may reflect the relative correlation of each image with the image class. The image class correlation module 308 can use the weights, scores, and/or rankings to generate the set of training images. [073] The training image data store 309 can receive the training image set from the training image selection module 301. The training image data store 309 can store the training image set. In some embodiments, the training image data store 309 may allow the classifier training module 310 to access the training image set. [074] The classifier training module 310 can be coupled to the training image data store 309 and the classifier 208. In various embodiments, the classifier training module 310 can provide the set of training images to train the classifier 208 to recognize an image class. The set of training images can be limited to a threshold number of the highest-ranked images in the sample set in relation to a particular image class. The threshold number of highest-ranked images can be represented by a value K, where K is any integer value. For example, suppose that the training image collection module 302 collected 1,000 images as the sample image set to ultimately train the classifier 208 to recognize an image class for domestic cats. Suppose further that the image class correlation module 308 assigned scores to 200 of those images that were higher than the scores assigned to the remaining 800 images. In such a case, the classifier training module 310 can provide only the 200 top-scoring images to the classifier 208 as the set of training images, so that the classifier 208 can efficiently determine the visual attributes of images that correlate highly with the image class for domestic cats. [075] Figure 4 shows an example of an image classification evaluation module 206, according to some embodiments. The image classification evaluation module 206 may include an evaluation image collection module 402, a classifier input module 404, a visual pattern model collection module 405, a classifier score receiving module 406, a visual properties ranking module 408, and an evaluated image delivery module 410. [076] The evaluation image collection module 402 can be coupled to the other modules of the image classification evaluation module 206. In some embodiments, the evaluation image collection module 402 can collect a set of evaluation images for image classification from the unclassified image data store 202. The evaluation image set can be the same as or different from the sample image set and the training image set. The evaluation image set may comprise a different number of images (for example, a larger number of images) than the sample image set and the training image set. In various embodiments, the set of evaluation images can be obtained randomly or selectively from the unclassified image data store 202. [077] The visual pattern model collection module 405 can be coupled to the other modules of the image classification evaluation module 206 and to the image classification training module 204. In some embodiments, the visual pattern model collection module 405 can receive, from the image classification training module 204, visual pattern models that correspond to a particular image class. The visual pattern model collection module 405 can additionally provide the visual pattern models to the classifier input module 404. [078] The classifier input module 404 can be coupled to the other modules of the image classification evaluation module 206. In some embodiments, the classifier input module 404 can receive the set of evaluation images from the evaluation image collection module 402. The classifier input module 404 can also receive, from the visual pattern model collection module 405, a visual pattern model that corresponds to a particular image class. The classifier input module 404 can instruct the classifier 208 to attempt to recognize the visual pattern model in each of the evaluation images. [079] The classifier score receiving module 406 can be coupled to the other modules of the image classification evaluation module 206. In some embodiments, the classifier score receiving module 406 can receive, from the classifier 208, scores that indicate the extent to which particular images in the evaluation image set correlate with the visual pattern model. [080] The visual properties ranking module 408 can be coupled to the other modules of the image classification evaluation module 206. In various embodiments, the visual properties ranking module 408 can rank the set of evaluation images based on the extent to which the score of each of the evaluation images correlates with the visual pattern model. In some embodiments, the visual properties ranking module 408 can provide a set of re-ranked images, which constitute a set of classified images for indexing or searching. [081] The evaluated image delivery module 410 can be coupled to the other modules of the image classification evaluation module 206. In some embodiments, the evaluated image delivery module 410 can receive ranked or re-ranked images from the visual properties ranking module 408. The evaluated image delivery module 410 can provide the re-ranked images to the classified image data store 210, along with an index or other information that reflects the extent to which each re-ranked image correlates with a specified image class. [082] Figure 5 shows an example of a classifier 208, according to some embodiments. The classifier 208 may include a visual pattern creation module 502, a visual pattern recognition module 504, and a classified image interface module 506. [083] The visual pattern creation module 502 can be coupled to the visual pattern recognition module 504. The visual pattern creation module 502 can receive, during the training phase of the classifier 208, a set of training images from the image classification training module 204, and create a visual pattern model of features that are common to the set of training images associated with the image class. To create the visual pattern model, the visual pattern creation module 502 can implement a visual pattern recognition algorithm, such as a bag-of-visual-words histogram technique from computer vision that counts the occurrences of a vocabulary of local image features in each of the training images. In one embodiment, the visual pattern creation module 502 can break an image into segments, and can evaluate each segment of the image for the presence of visual features. The visual pattern creation module 502 can additionally extract the visual features identified in each segment of the image and can represent the visual features as vectors. Using the vectors, the visual pattern creation module 502 can create a visual pattern model of features that are common to the set of training images. [084] In various embodiments, the visual pattern creation module 502 can create a visual pattern model based on the features that are most commonly found in the set of training images. For example, in these embodiments, the image classification training module 204 can provide the visual pattern creation module 502 with an integer number K of images, and the visual pattern creation module 502 can recognize the features that are most commonly found in the K images. [085] The visual pattern recognition module 504 can be coupled to the visual pattern creation module 502.
The visual pattern recognition module 504 can receive, during the evaluation phase, a set of evaluation images from the image classification evaluation module 206, and can identify the extent to which each of the evaluation images correlates with a particular visual pattern model associated with an image class. In some embodiments, the visual pattern recognition module 504 may employ visual pattern recognition, such as a visual word histogram. The visual pattern recognition may comprise a neural network image classification technique, in some embodiments. The visual pattern recognition module 504 can determine the various visual features within the set of evaluation images and, for each image, represent the extracted visual features as a set of vectors. The visual pattern recognition module 504 can also compare the vectors of each of the evaluation images with various visual pattern feature models. In some embodiments, the visual pattern recognition module 504 can score the extent to which each of the evaluation images correlates with the various visual pattern models generated during the training phase. [086] The classified image interface module 506 can be coupled to the visual pattern creation module 502 and the visual pattern recognition module 504. In some embodiments, the classified image interface module 506 can receive classified images, together with their scores, from the visual pattern recognition module 504. The classified image interface module 506 can provide the classified images and/or the scores to the classified image data store 210. The classified image interface module 506 can also provide visual pattern models to the classified image data store 210. [087] Figure 6 shows an example of a process 600 for classifying images, according to some embodiments. The process 600 is discussed in conjunction with the image classification module 104 shown in Figure 2A.
The process 600 may include a training phase 600a and an evaluation phase 600b. In block 602 of the process 600, the image classification training module 204 can collect a set of sample images from the unclassified image data store 202. In block 604, the image classification training module 204 can collect contextual cues associated with the sample image set. In block 606, the image classification training module 204 can use the contextual cues to score and rank the images based on their correlation with an image class of interest, as discussed in this document. Based on the scoring and ranking, a set of training images can be determined. In block 607, the image classification training module 204 can train the classifier 208 based on the training set. In block 608, the image classification evaluation module 206 can collect a set of evaluation images from the unclassified image data store 202. In block 610, the classifier 208 can compare visual attributes of the evaluation image set with a set of visual pattern models associated with the image class. In block 612, the classifier 208 can determine whether each image in the set of evaluation images is within the image class based on the comparison. [088] Figure 7 shows an example of a process 700 for training a classifier, according to some embodiments. The process 700 is discussed in conjunction with the image classification training module 204 shown in Figure 3. In block 701, the image class specification module 306 can specify an image class that the classifier 208 is to be trained to recognize. In block 702, the training image collection module 302 can receive a set of sample images, where each of the sample images has associated contextual cues. In block 704, the contextual cue extraction module 304 can extract contextual cues from the set of sample images.
In block 708, the image class correlation module 308 can mark the correlation of each image in the set of Petition 870170009199, of 02/10/2017, p. 38/69 34/57 sample images with the image class based on one or more contextual indications associated with the image. In block 710, the image class correlation module 308 can classify the set of sample images based on the markup of each image. In block 712, the image class correlation module 308 can determine a set of training images from the sample set to train the classifier 208. In some embodiments, determining a training set may comprise classifying each image in the set of samples based on marking. The determination may also comprise selecting a top marking subset from the set of sample images. The top tagging subset can comprise the set of training images. In block 714, the classifier training module 310 can train the classifier 208 to identify common visual patterns in the set of training images. [089] Figure 8 shows an example of an 800 process for classifying images, according to some modalities. Process 800 is discussed together with the image classification evaluation module 206 shown in Figure 4, and the classifier 208 shown in Figure 5. In block 802, the evaluation image collection module 402 can collect a set of images of evaluation from the storage of unclassified image data 202. In block 804, the evaluation image collection module 402 can determine an image class to evaluate the visual properties of the evaluation image set. In some embodiments, the classifier input module 404 can provide the set of evaluation images and the image class for classifier 208. In block 806, the visual pattern recognition module 504 can mark the correlation of each image in the set of evaluation images with a visual standard model associated with the image class. In block 808, the visual pattern recognition module 504 can classify each image of the Petition 870170009199, of 02/10/2017, p. 
39/69 35/57 set of evaluation images based on the marked correlation of each image in the set of evaluation images. In block 810, the visual pattern recognition module 504 can associate a subset of the upper marking of the set of evaluation images with the image class. In some embodiments, the classified image interface module 506 can then provide the upper markup subset for several other modules in a social networking system. [090] Figure 9 shows an example of a 900 visualization of a contextually generated classification of a group of images by the image classification module 104, according to some modalities. Display 900 includes a group of unclassified images 902, a first group of classified images 904, a second group of classified images 906, a third group of classified images 908, and a fourth group of classified images 910. In the example in Figure 9 , the unclassified image group 902 includes an image group that has contextual indications associated with it. Contextual directions can include image tags and other contextual information. To produce the first group of classified images 904, the second group of classified images 906, the third group of classified images 908 and the fourth group of classified images 910, one or more of the groups of unclassified images 902 were provided for the image classification training 204. The image classification training module 204, during the training phase, used sets of unclassified image groups 902, classified based on contextual indications, to train classifier 208 to recognize associated visual attributes to four classes of images, namely: a first class of image of images that have a synthetic appearance, a second class of image of approximate images, a third class of image of images taken in the air Petition 870170009199, of 02/10/2017, p. 40/69 36/57 free, and a fourth image class of images that depict water. 
The image classification evaluation module 206, during an evaluation phase, provided the group of unclassified images 902 for the classifier 208 which was trained to compare visual attributes of the group of unclassified images 902 with visual standard models associated with the four image classes. The outputs of classifier 208 corresponded to the four image classes that classifier 208 was trained to recognize. More specifically, classifier 208 produced the first group of classified images 904, which corresponds to the first image class; the second group of images classified 906, which corresponds to the second image class; the third group of images classified 908, which corresponds to the third class of image; and the fourth group of classified images 910, which corresponds to the fourth image class. [091] Figure 10 shows an example of a visualization 1000 of an image filter classification generated contextually from a group of images by the image classification module 104, according to some modalities. View 1000 includes a group of unrated images 1002 and a group of classified images 1004. The group of unrated images 1002 can represent a portion of unrated images on a social network system. A set of sample images was associated with the markings and other contextual indications. Based on their contextual indications, each image in the sample image set was marked based on its correlation with an image class of interest. In this example, cat is the image class of interest. The images in the sample image set were then classified based on their markings. In this example, the 200 highest rated images in the sample image set were designated as a set of training images. The set of training images was then applied to train a classifier to recognize common visual patterns Petition 870170009199, of 02/10/2017, p. 41/69 37/57 depicted in the images. Visual pattern models were generated based on the training set and associated with the cat image class. 
The group of unclassified images 1002 was used as a set of evaluation images. The set of evaluation images was provided to the classifier to score its correlation with the visual pattern models associated with the "cat" image class. The scores of the set of evaluation images were ranked, and the highest-scoring images from the set of evaluation images were selected as the group of classified images 1004.
SOCIAL NETWORK SYSTEM — EXAMPLE IMPLEMENTATION
[092] Figure 11 is a network diagram of an exemplary social networking system 1100 in which the contextual image classification system 102 can be implemented, according to some embodiments. The social networking system 1100 includes one or more user devices 1110, one or more external systems 1120, a social networking system 1130 and a network 1150. In one embodiment, the social networking system discussed in connection with the embodiments described above can be implemented as the social networking system 1130. For purposes of illustration, the embodiment of the social networking system 1100 shown in Figure 11 includes a single external system 1120 and a single user device 1110. However, in other embodiments, the social networking system 1100 may include more user devices 1110 and/or more external systems 1120. In certain embodiments, the social networking system 1130 is operated by a social networking system provider, whereby the external systems 1120 are separate from the social networking system 1130 in that they can be operated by different entities. In various embodiments, however, the 1130 social networking system and the 1120 external systems work together to provide social networking services to users (or members) of the 1130 social networking system. In this sense, the 1130 social networking system provides a primary platform, or support, that other systems, such as the external 1120 systems, can use to provide social networking services and functionality to users over the Internet. [093] The 1110 user device comprises one or more computing devices that can receive input from a user and transmit and receive data over the 1150 network. In one embodiment, the 1110 user device is a conventional computer system that runs, for example, a Microsoft Windows-compatible operating system (OS), Apple OS X, and/or a Linux distribution. In another embodiment, the user device 1110 can be a device that has computer functionality, such as a smart phone, a tablet computer, a personal digital assistant (PDA), a cell phone, etc. The 1110 user device is configured to communicate over the 1150 network. The 1110 user device can run an application, for example, a browser application that allows a user of the 1110 user device to interact with the 1130 social networking system. In another embodiment, the 1110 user device interacts with the 1130 social networking system through an application programming interface (API) provided by the 1110 user device's native operating system, such as iOS and ANDROID. The user device 1110 is configured to communicate with the external system 1120 and the social networking system 1130 over the network 1150, which can comprise any combination of local and/or wide area networks, using wired and/or wireless communication systems. [094] In one embodiment, the 1150 network uses standard communications technologies and protocols. In this way, the 1150 network can include links that use technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, CDMA, GSM, LTE, digital subscriber line (DSL), etc. Similarly, the network protocols used on the 1150 network can include
multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), User Datagram Protocol (UDP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), file transfer protocol (FTP), and the like. The data exchanged over the 1150 network can be represented using technologies and/or formats that include hypertext markup language (HTML) and extensible markup language (XML). In addition, all of the links can be encrypted using conventional encryption technologies, such as secure sockets layer (SSL), transport layer security (TLS), and Internet Protocol security (IPsec). [095] In one embodiment, the user device 1110 can display content from the external system 1120 and/or from the social networking system 1130 by processing a markup language document 1114 received from the external system 1120 and from the 1130 social networking system using a browser application 1112. The markup language document 1114 identifies content and one or more instructions that describe the formatting or presentation of the content. By executing the instructions included in the markup language document 1114, the browser application 1112 displays the identified content using the format or presentation described by the markup language document 1114. For example, the markup language document 1114 includes instructions for generating and displaying a web page that has multiple frames that include text and/or image data retrieved from the external system 1120 and the social networking system 1130. In various embodiments, the markup language document 1114 comprises a data file that includes extensible markup language (XML) data, extensible hypertext markup language (XHTML) data, or other markup language data. In addition, the markup language document 1114 can include JavaScript Object Notation (JSON) data, JSON with padding (JSONP) data, and JavaScript data to facilitate data exchange between the external system 1120 and the user device 1110. The browser application 1112 on the user device 1110 can use a JavaScript compiler to decode the markup language document 1114. [096] The markup language document 1114 can also include, or link to, applications or application frameworks, such as FLASH™ or Unity™ applications, the SilverLight™ application framework, etc. [097] In one embodiment, the user device 1110 also includes one or more cookies 1116 that include data indicating whether a user of the user device 1110 is registered with the 1130 social networking system, which may enable modification of the data communicated from the social networking system 1130 to the user device 1110. [098] The external system 1120 includes one or more web servers that include one or more web pages 1122a, 1122b, which are communicated to the user device 1110 using the 1150 network. The external system 1120 is separate from the social networking system 1130. For example, the external system 1120 is associated with a first domain, while the social networking system 1130 is associated with a separate social networking domain. Web pages 1122a, 1122b, included in the external system 1120, comprise markup language documents 1114 that identify content and that include instructions that specify the formatting or presentation of the identified content. [099] The social networking system 1130 includes one or more computing devices for a social networking system, which includes a plurality of users, and endows the users of the social networking system with the ability to communicate and interact with other users of the social networking system. In some cases, the social networking system may be represented by a graph, that is, a data structure which includes edges and nodes.
Other data structures can also be used to represent the social networking system, including, but not limited to, databases, objects, classes, meta-elements, files, or any other data structure. The 1130 social networking system can be administered, managed or controlled by an operator. The operator of the 1130 social networking system can be a human being, an automated application, or a series of applications to manage content, regulate policies and collect usage metrics within the 1130 social networking system. Any type of operator can be used. [0100] Users can join the 1130 social networking system and then add connections to any number of other users of the 1130 social networking system to whom they wish to be connected. As used in this document, the term "friend" refers to any other user of the 1130 social networking system with whom a user has formed a connection, association or relationship through the 1130 social networking system. For example, in one embodiment, if users of the social networking system 1130 are represented as nodes in the social graph, the term "friend" can refer to an edge formed between and directly connecting two user nodes. [0101] Connections can be added explicitly by a user or can be automatically created by the 1130 social networking system based on common characteristics of the users (for example, users who are students of the same educational institution). For example, a first user specifically selects a particular other user to be a friend. Connections on the 1130 social networking system are generally in both directions, but need not be, so the terms "user" and "friend" depend on the frame of reference. Connections between users of the 1130 social networking system are generally bilateral ("bidirectional") or "mutual", but connections can also be unilateral, or "unidirectional". For example, if both Bob and Joe are users of the
social networking system 1130 and connected to each other, Bob and Joe are connections of each other. If, on the other hand, Bob wants to connect to Joe to view data communicated to the 1130 social networking system by Joe, but Joe does not want to form a mutual connection, a one-way connection can be established. The connection between users can be a direct connection; however, some embodiments of the 1130 social networking system allow the connection to be indirect through one or more levels of connections or degrees of separation. [0102] In addition to establishing and maintaining connections between users and allowing interactions between users, the 1130 social networking system provides users with the ability to take actions on various types of items supported by the 1130 social networking system. These items can include groups or networks (that is, social networks of people, entities and concepts) to which users of the 1130 social networking system can belong, events or calendar entries that a user may be interested in, computer-based applications that a user can use through the 1130 social networking system, transactions that allow users to buy or sell items through services provided by or through the 1130 social networking system, and interactions with advertisements that a user can perform on and off the 1130 social networking system. These are just a few examples of the items on which a user can act on the 1130 social networking system, and many others are possible. A user can interact with anything that is capable of being represented in the social networking system 1130 or in the external system 1120, separate from the social networking system 1130 or coupled to the social networking system 1130 through the network 1150. [0103] The 1130 social networking system also has the capacity to link a variety of entities.
For example, the 1130 social networking system allows users to interact with each other, as well as with external 1120 systems or other entities, through an API, a web service, or other channels of communication. The social networking system 1130 generates and maintains the social graph, which comprises a plurality of nodes interconnected by a plurality of edges. Each node in the social graph can represent an entity that can act on another node and/or that can be acted on by another node. The social graph can include several types of nodes. Examples of node types include users, non-personal entities, content items, web pages, groups, activities, messages, concepts, and anything else that can be represented by an object in the 1130 social networking system. An edge between two nodes in the social graph can represent a particular type of connection or association between the two nodes, which can result from node relationships or from an action that was performed by one of the nodes on the other node. In some cases, the edges between nodes can be weighted. The weight of an edge can represent an attribute associated with the edge, such as the strength of the connection or association between the nodes. Different types of edges can be given different weights. For example, an edge created when one user "likes" another user can be given one weight, while an edge created when one user befriends another user can be given a different weight. [0104] As an example, when a first user identifies a second user as a friend, an edge is generated in the social graph connecting a node that represents the first user and a second node that represents the second user. As various nodes relate to or interact with each other, the 1130 social networking system modifies the edges that connect the various nodes to reflect the relationships and interactions.
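The node-and-edge structure with typed, weighted edges described above can be sketched as follows. This is a minimal illustration only; the class name, edge types, and weight values are hypothetical and are not prescribed by the description, which leaves the concrete data structure open.

```python
# Minimal hypothetical sketch of a weighted social graph as described above:
# nodes for users (or other entities) and typed, weighted edges between them.
# The representation and the weight values are illustrative only.

class SocialGraph:
    def __init__(self):
        self.nodes = set()
        # frozenset({a, b}) -> {"type": ..., "weight": ...}; an unordered key
        # models a bilateral ("mutual") connection
        self.edges = {}

    def add_node(self, node):
        self.nodes.add(node)

    def add_edge(self, a, b, edge_type, weight=1.0):
        """Add a mutual edge between two nodes, e.g. a friendship."""
        self.add_node(a)
        self.add_node(b)
        self.edges[frozenset((a, b))] = {"type": edge_type, "weight": weight}

    def connection(self, a, b):
        """Return the edge between a and b, or None if they are unconnected."""
        return self.edges.get(frozenset((a, b)))

g = SocialGraph()
g.add_edge("Bob", "Joe", "friend", weight=1.0)   # befriending: one weight
g.add_edge("Bob", "Page1", "like", weight=0.5)   # liking: a different weight
print(g.connection("Joe", "Bob"))  # -> {'type': 'friend', 'weight': 1.0}
```

Because the edge key is an unordered pair, looking up the connection from either endpoint returns the same edge, mirroring the mutual connections described above; a one-way connection would instead use an ordered pair as the key.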
[0105] The 1130 social networking system also includes user-generated content, which enhances a user's interactions with the 1130 social networking system. User-generated content can include anything that a user can add, upload, send or "post" to the 1130 social networking system. For example, a user communicates posts to the 1130 social networking system from an 1110 user device. Posts may include data, such as status updates or other text data, location information, images such as photos, videos, links, music, or other similar data and/or media. Content can also be added to the 1130 social networking system by third parties. Content items are represented as objects in the 1130 social networking system. In this way, users of the 1130 social networking system are encouraged to communicate with each other by posting text and content items of various types of media through various communication channels. Such communication increases the interaction of users with each other and increases the frequency with which users interact with the 1130 social networking system. [0106] The social networking system 1130 includes a web server 1132, an API request server 1134, a user profile store 1136, a connection store 1138, an action agent 1140, an activity log 1142, an authorization server 1144, an image classification module 1146, and an image application module 1148. In one embodiment, the social networking system 1130 may include additional components, fewer components, or different components for different applications. Other components, such as network interfaces, security mechanisms, load balancers, failover servers, management and network operations consoles, and the like, are not shown so as not to obscure the details of the system.
[0107] The 1136 user profile store maintains information about user accounts, including biographical, demographic and other types of descriptive information, such as work experience, educational background, hobbies or preferences, location, and the like, that has been declared by users or inferred by the 1130 social networking system. This information is stored in the 1136 user profile store so that each user is uniquely identified. The 1130 social networking system also stores data describing one or more connections between different users in the 1138 connection store. The connection information can indicate users who have similar or common work experiences, group memberships, hobbies or educational backgrounds. In addition, the 1130 social networking system includes user-defined connections between different users, which allow users to specify their relationships with other users. For example, user-defined connections allow users to generate relationships with other users that parallel the users' real-life relationships, such as friends, coworkers, partners, and so on. Users can select from predefined connection types or define their own connection types as needed. Connections with other nodes in the 1130 social networking system, such as non-personal entities, buckets, group centers, images, interests, pages, external systems, concepts and the like, are also stored in the 1138 connection store. [0108] The 1130 social networking system maintains data about objects with which a user can interact. To maintain this data, the user profile store 1136 and the connection store 1138 store instances of the corresponding object types maintained by the 1130 social networking system. Each object type has information fields that are suitable for storing information appropriate to the type of object.
For example, the 1136 user profile store contains data structures with fields suitable for describing a user's account and information related to a user's account. When a new object of a particular type is created, the 1130 social networking system initializes a new data structure of the corresponding type, assigns it a unique object identifier, and begins adding data to the object as needed. This can occur, for example, when a user becomes a user of the 1130 social networking system: the 1130 social networking system generates a new instance of a user profile in the 1136 user profile store, assigns an identifier to the user account, and begins to populate the fields of the user account with information provided by the user. [0109] The 1138 connection store includes data structures suitable for describing a user's connections to other users, connections to external 1120 systems, or connections to other entities. The 1138 connection store can also associate a connection type with a user's connections, which can be used in conjunction with the user's privacy settings to regulate access to information about the user. In one embodiment, the user profile store 1136 and the connection store 1138 can be implemented as a federated database. [0110] The data stored in the connection store 1138, the user profile store 1136, and the activity log 1142 allows the 1130 social networking system to generate the social graph, which uses nodes to identify various objects and edges that connect the nodes to identify relationships between different objects. For example, if a first user establishes a connection with a second user in the 1130 social networking system, the user accounts of the first user and the second user from the 1136 user profile store can act as nodes in the social graph. The connection between the first user and the second user stored by the 1138 connection store is an edge between the nodes associated with the first user and the second user.
Continuing this example, the second user can then send a message to the first user within the 1130 social networking system. The action of sending the message, which can be stored, is another edge between the two nodes in the social graph that represent the first user and the second user. In addition, the message can be identified and included in the social graph as another node connected to the nodes that represent the first user and the second user. [0111] In another example, a first user can tag a second user in an image that is maintained by the 1130 social networking system (or, alternatively, in an image maintained by another system outside the 1130 social networking system). The image itself can be represented as a node in the 1130 social networking system. This tagging action can create edges between the first user and the second user, as well as create an edge between each of the users and the image, which is also a node in the social graph. In yet another example, if a user confirms attendance at an event, the user and the event are nodes obtained from the 1136 user profile store, and the attendance at the event is an edge between the nodes that can be retrieved from the activity log 1142. Through the generation and maintenance of the social graph, the 1130 social networking system includes data describing many different types of objects and the interactions and connections between those objects, providing a rich source of socially relevant information. [0112] The 1132 web server links the 1130 social networking system to one or more 1110 user devices and/or one or more 1120 external systems over the 1150 network. The 1132 web server serves web pages, as well as other web-related content, such as Java, JavaScript, Flash, XML, and so on.
The 1132 web server can include an email server or other messaging functionality for receiving and routing messages between the 1130 social networking system and one or more 1110 user devices. The messages can be instant messages, queued messages (for example, email), text and SMS messages, or any other suitable message format. [0113] The API request server 1134 allows one or more external 1120 systems and 1110 user devices to access information from the 1130 social networking system by calling one or more API functions. The API request server 1134 can also allow external 1120 systems to send information to the 1130 social networking system by calling APIs. The external system 1120, in one embodiment, sends an API request to the 1130 social networking system over the 1150 network, and the API request server 1134 receives the API request. The API request server 1134 processes the request by calling an API associated with the API request to generate an appropriate response, which the API request server 1134 communicates to the external system 1120 over the 1150 network. For example, in response to an API request, the API request server 1134 collects data associated with a user, such as the user's connections that have been recorded in the external system 1120, and communicates the collected data to the external system 1120. In another embodiment, the 1110 user device communicates with the 1130 social networking system via APIs in the same way as the external 1120 systems. [0114] The 1140 action agent is capable of receiving communications from the 1132 web server about user actions within and/or outside the 1130 social networking system. The 1140 action agent populates the 1142 activity log with information about user actions, allowing the 1130 social networking system to record various actions taken by its users within the 1130 social networking system and outside the 1130 social networking system.
Any action that a particular user takes in relation to another node in the 1130 social networking system can be associated with each user's account, through information maintained in the 1142 activity log or in a similar database or other data repository. Examples of actions taken by a user within the 1130 social networking system that are identified and stored can include, for example, adding a connection to another user, sending a message to another user, reading a message from another user, viewing content associated with another user, attending an event posted by another user, posting an image, "liking" an image, or other actions that involve interacting with another user or another object. When a user takes an action within the 1130 social networking system, the action is recorded in the 1142 activity log. In one embodiment, the 1130 social networking system maintains the 1142 activity log as a database of entries. When an action is taken within the 1130 social networking system, an entry for the action is added to the 1142 activity log. The 1142 activity log can be referred to as an action log. [0115] In addition, user actions can be associated with concepts and actions that occur within an entity outside the 1130 social networking system, such as an external system 1120 that is separate from the 1130 social networking system. For example, the action agent 1140 can receive, from the 1132 web server, data describing a user's interaction with an external system 1120. In this example, the external system 1120 reports a user's interaction according to structured actions and objects in the social graph.
[0116] Other examples of actions in which a user interacts with an external system 1120 include a user expressing an interest in an external system 1120 or another entity, a user posting a comment on the 1130 social networking system that discusses an external system 1120 or a web page 1122a within the external system 1120, a user posting a Uniform Resource Locator (URL) or another identifier associated with an external system 1120 on the social networking system 1130, a user attending an event associated with an external system 1120, or any other action by a user that is related to an external system 1120. Thus, the 1142 activity log can include actions that describe interactions between a user of the 1130 social networking system and an external system 1120 that is separate from the 1130 social networking system. [0117] The 1144 authorization server enforces one or more
For example, the privacy setting can identify specific information to be shared with other users; the privacy setting identifies a specific work phone number or set of related information, such as personal information that includes a profile photo, home phone number, and status. Alternatively, the privacy setting can apply to all information associated with the user. The specification of the set of entities that can access particular information can also be specified at various levels of granularity. Various sets of entities with which information can be shared can include, for example, all user friends, all friends of friends, all applications or all 1120 external systems. One mode allows the specification of the set of entities comprises an enumeration of entities. For example, the user can provide a list of 1120 external systems that are allowed to access certain information. Petition 870170009199, of 02/10/2017, p. 55/69 51/57 Another modality allows the specification to comprise a set of entities together with exceptions that are not allowed to access the information. For example, a user can allow all external 1120 systems to access the user's job information, but specify a list of external 1120 systems that are not allowed to access job information. Certain modalities call the list of exceptions that are not allowing access to certain information from a blacklist. External 1120 systems that belong to a user-specified blacklist are prevented from accessing the information specified in the privacy setting. Various combinations of granularity of information specification, and granularity of specification of entities, with which shared information is possible. For example, all personal information can be shared with friends, whereas all work information can be shared with friends of friends. 
[0119] The 1144 authorization server contains logic to determine whether certain information associated with a user can be accessed by the user's friends, external 1120 systems, and/or other applications and entities. The external system 1120 may need authorization from the 1144 authorization server to access the user's more private and sensitive information, such as the user's work phone number. Based on the user's privacy settings, the 1144 authorization server determines whether another user, the external system 1120, an application or another entity is allowed to access information associated with the user, including information about actions taken by the user. [0120] In the example of Figure 11, the social networking system 1130 can include the image classification module 1146 and the image application module 1148, as described in more detail in this document. In one embodiment, the image classification module 1146 can collect contextual cues for a set of sample images and use the contextual cues to generate a set of training images. The set of training images can be used to train a classifier to generate visual pattern models for an image class. The classifier can score a set of evaluation images based on their correlation with the visual pattern models. The highest-scoring images in the set of evaluation images can be considered the most closely related to the image class. In one embodiment, the image classification module 1146 can be implemented as the image classification module 104. The image application module 1148 can interface with other applications to enable searching for classified images. In one embodiment, the image application module 1148 can be implemented as the image application module 106.
HARDWARE IMPLEMENTATION
[0121] The foregoing processes and resources can be implemented across a wide variety of machine and computer system architectures and in a wide variety of network and computing environments.
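The end-to-end flow of the image classification module 1146 described above can be sketched as follows. This is a rough illustration only: a nearest-centroid model stands in for the classifier, whose internals the description leaves open, and all feature vectors, values, and names are hypothetical.

```python
# Hypothetical sketch of the flow described above: score sample images by
# contextual cues, train on the top-scoring subset, then score and rank
# evaluation images. A nearest-centroid model stands in for the classifier.

def centroid(vectors):
    """Average a list of feature vectors into a single visual-pattern model."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def similarity(v, c):
    # negative squared distance: higher means more similar to the model
    return -sum((a - b) ** 2 for a, b in zip(v, c))

def train(sample, image_class, top_n):
    """sample: list of (feature_vector, tags); train on the top_n by tag score."""
    scored = sorted(sample, key=lambda s: s[1].count(image_class), reverse=True)
    return centroid([features for features, _ in scored[:top_n]])

def evaluate(model, images, top_n):
    """Rank evaluation images by similarity to the visual-pattern model."""
    ranked = sorted(images, key=lambda f: similarity(f, model), reverse=True)
    return ranked[:top_n]

sample = [([1.0, 0.9], ["cat", "cat"]),
          ([0.9, 1.0], ["cat"]),
          ([0.0, 0.1], ["dog"])]
model = train(sample, "cat", top_n=2)
print(evaluate(model, [[0.95, 0.95], [0.1, 0.0]], top_n=1))
```

The training phase never looks at the evaluation images' tags; only their visual features are compared against the model, which is the point of bootstrapping a visual classifier from contextual cues.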
Figure 12 illustrates an example of a computer system 1200 that can be used to implement one or more of the embodiments described in this document, according to one embodiment. The computer system 1200 includes sets of instructions for causing the computer system 1200 to perform the processes and resources discussed in this document. The computer system 1200 can be connected (for example, over a network) to other machines. In a networked deployment, the computer system 1200 can operate in the capacity of a server machine or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. In one embodiment, the computer system 1200 can be the social networking system 1130, the user device 1110, or the external system 1120, or a component thereof. In one embodiment, the computer system 1200 can be one server among many that make up all or part of the 1130 social networking system. [0122] The computer system 1200 includes a processor 1202, a cache 1204, and one or more executable modules and drivers, stored in a computer-readable medium, directed to the processes and resources described in this document. In addition, the computer system 1200 includes a high performance input/output (I/O) bus 1206 and a standard I/O bus 1212. A host bridge 1210 couples the processor 1202 to the high performance I/O bus 1206, whereas an I/O bus bridge couples the two buses 1206 and 1212 to each other. A system memory 1214 and a network interface 1216 are coupled to the high performance I/O bus 1206. The computer system 1200 may additionally include video memory and a display device coupled to the video memory (not shown). Mass storage 1218 and I/O ports 1220 are coupled to the standard I/O bus 1212.
The computer system 1200 can optionally include a keyboard and pointing device, a display device, or other input/output devices (not shown) coupled to the standard I/O bus 1212. Collectively, these elements are intended to represent a broad category of computer hardware systems, which include, but are not limited to, computer systems based on x86-compatible processors manufactured by Intel Corporation of Santa Clara, California, and x86-compatible processors manufactured by Advanced Micro Devices (AMD), Inc., of Sunnyvale, California, as well as any other suitable processor. [0123] An operating system manages and controls the operation of the computer system 1200, including the input and output of data to and from software applications (not shown). The operating system provides an interface between the software applications that run on the system and the hardware components of the system. Any suitable operating system can be used, such as the LINUX Operating System, the Apple Macintosh Operating System, available from Apple Computer Inc. of Cupertino, California, UNIX operating systems, Microsoft® Windows® operating systems, BSD operating systems, and the like. Other implementations are possible. [0124] The elements of the computer system 1200 are described in more detail below. In particular, the network interface 1216 provides communication between the computer system 1200 and any of a wide range of networks, such as an Ethernet network (for example, IEEE 802.3), a backplane, etc. Mass storage 1218 provides permanent storage for the data and programming instructions for carrying out the processes and resources described above, implemented by the respective computing systems identified above, whereas the system memory 1214 (for example, DRAM) provides temporary storage for the data and programming instructions when they are executed by the processor 1202.
The I/O ports 1220 can be one or more serial and/or parallel communication ports that provide communication between additional peripheral devices, which can be coupled to the computer system 1200. [0125] The computer system 1200 can include a variety of system architectures, and various components of the computer system 1200 can be rearranged. For example, the cache 1204 can be on-chip with the processor 1202. Alternatively, the cache 1204 and the processor 1202 can be packaged together as a processor module, with the processor 1202 being referred to as the processor core. Furthermore, certain embodiments may not require nor include all of the above components. For example, peripheral devices coupled to the standard I/O bus 1212 can be coupled to the high performance I/O bus 1206. In addition, in some embodiments, only a single bus may exist, with the components of the computer system 1200 being coupled to the single bus. Furthermore, the computer system 1200 may include additional components, such as additional processors, storage devices, or memories. [0126] In general, the processes and features described in this document can be implemented as part of an operating system or a specific application, component, program, object, module, or series of instructions referred to as programs. For example, one or more programs can be used to perform specific processes described in this document. The programs typically comprise one or more instructions in various memory and storage devices in the computer system 1200 that, when read and executed by one or more processors, cause the computer system 1200 to perform operations to execute the processes and features described in this document.
The processes and features described in this document can be implemented in software, firmware, hardware (for example, an application-specific integrated circuit), or any combination thereof. [0127] In one implementation, the processes and features described in this document are implemented as a series of executable modules executed by the computer system 1200, individually or collectively in a distributed computing environment. The foregoing modules can be realized by hardware, by executable modules stored on a computer-readable medium (or machine-readable medium), or by a combination of both. For example, the modules can comprise a plurality or series of instructions to be executed by a processor in a hardware system, such as the processor 1202. Initially, the series of instructions can be stored on a storage device, such as the mass storage 1218. However, the series of instructions can be stored on any suitable computer-readable storage medium. Furthermore, the series of instructions need not be stored locally, and can be received from a remote storage device, such as a server on a network, via the network interface 1216. The instructions are copied from the storage device, such as the mass storage 1218, into the system memory 1214 and then accessed and executed by the processor 1202. In various implementations, a module or modules can be executed by one processor or multiple processors in one or multiple locations, such as multiple servers in a parallel processing environment.
[0128] Examples of computer-readable media include, but are not limited to, recordable-type media, such as volatile and non-volatile memory devices; solid state memories; floppy and other removable disks; hard disk drives; magnetic media; optical discs (for example, Compact Disc Read-Only Memory (CD-ROMs), Digital Versatile Discs (DVDs)); other similar non-transitory (or transitory), tangible (or non-tangible) storage media; or any type of medium suitable for storing, encoding, or transmitting a series of instructions for execution by the computer system 1200 to perform any one or more of the processes and features described in this document. [0129] For purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the description. It will be evident, however, to one skilled in the art that the embodiments of the disclosure can be practiced without these specific details. In some instances, modules, structures, processes, features, and devices are shown in block diagram form in order to avoid obscuring the description. In other instances, functional block diagrams and flow diagrams are shown to represent data and logic flows. The components of the block diagrams and flow diagrams (for example, modules, blocks, structures, devices, features, etc.) can be combined, separated, removed, reordered, and replaced in a manner other than as expressly described and shown in this document. [0130] Reference in this specification to "one embodiment", "an embodiment", "some embodiments", "various embodiments", "certain embodiments", "other embodiments", "one series of embodiments", or the like means that a particular feature, design, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
The appearances of, for example, the phrase "in one embodiment" or "in an embodiment" in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, whether or not there is express reference to an "embodiment" or the like, various features are described, which can be variously combined and included in some embodiments, but also variously omitted in other embodiments. Similarly, various features are described that can be preferences or requirements for some embodiments, but not for other embodiments. [0131] The language used in this document has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope that is set forth in the following claims.
Claims (20)
1. System CHARACTERIZED by the fact that it comprises: at least one processor; and a memory that stores instructions configured to instruct the at least one processor to perform: receiving a set of sample images, wherein each image in the sample set is associated with one or more social cues; scoring a correlation of each image in the sample set with an image class based on the one or more social cues associated with the image; and determining a set of training images for training a classifier from the sample set based on the scoring.
2. System, according to claim 1, CHARACTERIZED by the fact that it additionally comprises specifying the image class.
3. System, according to claim 1, CHARACTERIZED by the fact that the determining comprises ranking each image in the set of sample images based on the scoring.
4. System, according to claim 1, CHARACTERIZED by the fact that the determining comprises selecting a top-scoring subset from the set of sample images.
5. System, according to claim 4, CHARACTERIZED by the fact that the top-scoring subset is the set of training images.
6. System, according to claim 1, CHARACTERIZED by the fact that it additionally comprises training the classifier based on the set of training images.
7. System, according to claim 1, CHARACTERIZED by the fact that it additionally comprises generating a visual pattern model associated with the image class.
8. System, according to claim 1, CHARACTERIZED by the fact that the classifier is configured to use a histogram of visual words image classification technique or a neural network image classification technique.
9. System, according to claim 1, CHARACTERIZED by the fact that it additionally comprises determining an extent to which a set of evaluation images correlates with the image class.
10.
System, according to claim 9, CHARACTERIZED by the fact that the set of evaluation images is different from the set of sample images.
11. System, according to claim 9, CHARACTERIZED by the fact that the set of evaluation images comprises a larger set of images than the set of sample images.
12. System, according to claim 9, CHARACTERIZED by the fact that it additionally comprises scoring a correlation of each image in the set of evaluation images with a visual pattern model associated with the image class.
13. System, according to claim 12, CHARACTERIZED by the fact that it additionally comprises ranking each image in the set of evaluation images based on the scoring of the correlation of each image in the set of evaluation images.
14. System, according to claim 12, CHARACTERIZED by the fact that it additionally comprises associating a top-scoring subset of the set of evaluation images with the image class.
15. System, according to claim 1, CHARACTERIZED by the fact that the one or more social cues comprise one or more image tags.
16. System, according to claim 15, CHARACTERIZED by the fact that it additionally comprises determining a number of instances of a particular image tag among a total number of the one or more image tags associated with an image.
17. System, according to claim 1, CHARACTERIZED by the fact that the one or more social cues comprise one or more of: location data associated with an image from the set of sample images; or an identity of an uploader, a tagger, or an owner of an image from the set of sample images.
18. System, according to claim 1, CHARACTERIZED by the fact that the one or more social cues are received by a social network system.
19.
Computer-implemented method CHARACTERIZED by the fact that it comprises: receiving, by a computer system, a set of sample images, wherein each image in the sample set is associated with one or more social cues; scoring, by the computer system, a correlation of each image in the sample set with an image class based on the one or more social cues associated with the image; and determining, by the computer system, a set of training images for training a classifier from the sample set based on the scoring.
20. Computer storage medium CHARACTERIZED by the fact that it stores computer-executable instructions that, when executed, cause a computer system to perform a computer-implemented method that comprises: receiving a set of sample images, wherein each image in the sample set is associated with one or more social cues; scoring a correlation of each image in the sample set with an image class based on the one or more social cues associated with the image; and determining a set of training images for training a classifier from the sample set based on the scoring.
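Claims 1, 3 to 5, and 16 together describe a concrete selection loop: score each sample image's correlation with an image class from its social cues (for example, the fraction of its tags that match the class), rank the images by score, and keep a top-scoring subset as the training set. The following minimal Python sketch illustrates that loop; all names and the tag-fraction heuristic are hypothetical assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SampleImage:
    """A sample image with visual features and social cues (user-applied tags)."""
    features: List[float]                           # e.g., a visual-word histogram
    tags: List[str] = field(default_factory=list)   # social cues associated with the image


def score_correlation(image: SampleImage, image_class: str) -> float:
    """Score an image's correlation with a class from its social cues.

    Per claim 16, one cue is the number of instances of a particular tag
    among the total number of tags associated with the image.
    """
    if not image.tags:
        return 0.0
    matches = sum(1 for tag in image.tags if tag.lower() == image_class.lower())
    return matches / len(image.tags)


def select_training_set(samples: List[SampleImage],
                        image_class: str,
                        top_k: int) -> List[SampleImage]:
    """Rank samples by score and keep the top-scoring subset (claims 3-5)."""
    ranked = sorted(samples,
                    key=lambda s: score_correlation(s, image_class),
                    reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    samples = [
        SampleImage(features=[0.1], tags=["cat", "cat", "pet"]),
        SampleImage(features=[0.2], tags=["car"]),
        SampleImage(features=[0.3], tags=["cat"]),
    ]
    training = select_training_set(samples, "cat", top_k=2)
    print(len(training))  # 2
```

The selected subset would then be handed to a classifier for training (claim 6), for example a bag-of-visual-words or neural network model as contemplated in claim 8; the evaluation phase of claims 9 to 14 would apply the trained model to a separate, typically larger, image set and rank it the same way.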
Patent family:
Publication number | Publication date
KR102244748B1 | 2021-04-28
JP6612229B2 | 2019-11-27
JP2016527646A | 2016-09-08
WO2015020691A1 | 2015-02-12
US10169686B2 | 2019-01-01
KR20160040633A | 2016-04-14
US20150036919A1 | 2015-02-05
AU2014304803A1 | 2016-02-25
US20190279053A1 | 2019-09-12
MX2016001687A | 2016-09-06
CN105612514B | 2020-07-21
AU2014304803B2 | 2019-07-04
CN105612514A | 2016-05-25
MX367510B | 2019-08-26
IL243859D0 | 2016-04-21
CA2920193A1 | 2015-02-12
Legal status:
2019-12-03 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-03-03 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2020-09-01 | B11B | Dismissal acc. art. 36, par. 1 of IPL - no reply within 90 days to fulfil the necessary requirements
2021-10-13 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority applications:
Application number | Publication number | Filing date | Patent title
US13/959,446 | US10169686B2 | 2013-08-05 | Systems and methods for image classification by correlating contextual cues with images
PCT/US2014/015887 | WO2015020691A1 | 2014-02-11 | Systems and methods for image classification by correlating contextual cues with images